Moderate: Red Hat Ceph Storage 6.1 security, enhancements, and bug fix update

Related Vulnerabilities: CVE-2023-2183, CVE-2023-2801

Synopsis

Moderate: Red Hat Ceph Storage 6.1 security, enhancements, and bug fix update

Type/Severity

Security Advisory: Moderate

Topic

An update is now available for Red Hat Ceph Storage 6.1 in the Red Hat Ecosystem Catalog.

Description

Red Hat Ceph Storage is a scalable, open, software-defined storage platform that combines the most stable version of the Ceph storage system with a Ceph management platform, deployment utilities, and support services.

These new packages include numerous enhancements and bug fixes. Space precludes documenting all of these changes in this advisory. Users are directed to the Red Hat Ceph Storage Release Notes for information on the most significant of these changes:

https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/6.1/html/release_notes/index

Solution

Before applying this update, make sure all previously released errata
relevant to your system have been applied.

For details on how to apply this update, refer to:

https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/6

For supported configurations, refer to:

https://access.redhat.com/articles/1548993

Affected Products

  • Red Hat Enterprise Linux for x86_64 9 x86_64
  • Red Hat Enterprise Linux for IBM z Systems 9 s390x
  • Red Hat Enterprise Linux for Power, little endian 9 ppc64le

Fixes

  • BZ - 2181424 - [RGW] bucket stats output has incorrect num_objects in rgw.none and rgw.main on multipart upload
  • BZ - 2182385 - [rgw][rfe]: Object reindex tool should recover the index for 'versioned' buckets (6.1z3)
  • BZ - 2210840 - CVE-2023-2801 grafana: data source proxy race condition
  • BZ - 2210848 - CVE-2023-2183 grafana: missing access control allows test alerts by underprivileged user
  • BZ - 2211324 - [cee/sd][ceph-ansible] Cephadm-preflight playbook stops all the ceph services from node if older ceph rpms are present on the host.
  • BZ - 2213873 - [cee/sd][cephadm] osd_memory_target_autotune feature is not applying the tuned memory target.
  • BZ - 2227807 - snap-schedule: allow retention spec to specify max number of snaps to retain
  • BZ - 2227999 - client: issue a cap release immediately if no cap exists
  • BZ - 2228065 - mds: do not evict clients if OSDs are laggy
  • BZ - 2232663 - (RHCS 6.1) [Workload-DFG] ceph status is not reporting when an application is not enabled on pools
  • BZ - 2237881 - [6.1.z3 backport][cee/sd][BlueFS][RHCS 5.x] no BlueFS spillover health warning in RHCS 5.x
  • BZ - 2238666 - mds: blocklist clients with "bloated" session metadata
  • BZ - 2239449 - [RHCS-6.X backport] [RFE] BLK/Kernel: Improve protection against running one OSD twice
  • BZ - 2240143 - libcephsqlite may corrupt data from short reads
  • BZ - 2240838 - [6.1.z backport][RADOS] "currently delayed" slow ops does not provide details on why op has been delayed
  • BZ - 2241201 - PG auto-scaler configs on individual pools is changed after set & unset of "noautoscale" flag
  • BZ - 2243741 - [Stretch cluster] ceph is inaccessible after crash/shutdown tests are run
  • BZ - 2244978 - [Bluewash][IBM 6.1z2] [Live] [Cephadm-ansible] [Preflight playbook] Preflight playbook failing for IBM 6.1z2 live.
  • BZ - 2245147 - [rgw][indexless]: on Indexless placement, rgw daemon crashes with " ceph_assert(index.type == BucketIndexType::Normal)" (6.1)
  • BZ - 2245697 - radosgw-admin crashes when using --placement-id
  • BZ - 2247543 - [rbd_support] fix hangs and mgr crash when rbd_support module tries to recover from repeated blocklisting
  • BZ - 2249814 - userspace cephfs client can crash when upgrading from RHCS 6 to 7 (or from RHCS 5 -> 6)
  • BZ - 2249958 - update nfs-ganesha to V5.6 in RHCS 6.1 z3
  • BZ - 2252256 - kafka crashed during message callback
  • BZ - 2252337 - rgw: object lock retainUntilDate can overflow (32bit seconds)
  • BZ - 2252792 - rgw: object lock: governance mode override can be bypassed for multipart upload objects
  • BZ - 2252878 - [ceph-volume] ceph orch redeploys OSD with wrong dedicated DB size for non-collocated scenario